25.6. Computational Learning Theory (Part 1)

Remember, machine learning is just optimization of funny functions over hypothesis spaces. And what you want to keep in mind is that these hypothesis spaces are quite big, typically very big: high-dimensional, continuous, all the stuff we don't like. That's the hypothesis spaces.
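To get a feeling for "big", here is a small illustrative calculation (mine, not from the lecture), for the simplest discrete case: with $n$ Boolean input attributes, the space of all Boolean functions already has
\[
|H| \;=\; 2^{2^n}, \qquad \text{so for } n = 10:\quad |H| = 2^{1024} \approx 1.8 \times 10^{308}
\]
hypotheses, and that is before we even get to continuous spaces.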

So when engineers started to build learning programs, the not-quite-so-engineers asked the question: wait, what are you doing here? Can this work at all?

And then you ask yourself: if we have a learning algorithm, how can we learn something we don't know? How can we be sure that the hypothesis we get actually predicts the future well enough? That's the philosophical question: how can we learn at all? Or, formally: given a hypothesis h and a target function f we don't know, how can we be sure that they are close?

And of course, by now we know enough to ask a couple more questions, right? How many examples do we need to show the learning algorithm to get a good h? What hypothesis space should we use? If we have a very complex and big hypothesis space, which is good because it guarantees that our problem is realizable, how can we make sure that it isn't littered with local maxima, so that we don't sit very happily in some local maximum or minimum when we should really be where the global maximum or minimum is?

How complex should a hypothesis be? We have seen algorithms that penalize complex solutions, right? But they don't answer. They give us results, but they don't tell us whether this is just some random complexity or actually near the optimum. We don't know. And how do we avoid overfitting?

That's really what computational learning theory does: it attacks these problems with the methods of theoretical computer science, and of course statistics and AI and so on, but really mostly theoretical computer science.

And there's a very simple idea which leads to a nice three-letter name. The idea is that if you have enough examples, then any seriously wrong hypothesis, not just one that is a good approximation, but something that's really, really terrible while much better ones exist, will be found out sooner or later if you look at how it fares on the incoming examples. That's the idea we're going to explore here. If it's bad, it will be found out. Or, what we're really interested in: if it survives long enough, it's good. It can't be bad. Okay?
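To make "will be found out" concrete, here is a minimal sketch (my illustration, not code from the lecture): a hypothesis that disagrees with the target on a fraction eps of random inputs survives one example with probability 1 - eps, so it survives N independent examples with probability (1 - eps)^N, which we can check by simulation.

import random

def survival_probability(eps: float, n_examples: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of P(a bad hypothesis agrees with all N examples)."""
    survived = 0
    for _ in range(trials):
        # Each random example catches the bad hypothesis with probability eps.
        if all(random.random() >= eps for _ in range(n_examples)):
            survived += 1
    return survived / trials

for n in (5, 20, 50):
    est = survival_probability(eps=0.1, n_examples=n)
    print(f"N={n:3d}: simulated {est:.4f}  vs exact {(1 - 0.1) ** n:.4f}")

With eps = 0.1, survival drops below one percent after about 44 examples: seriously wrong hypotheses really do get found out.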

And "can't be bad" has a wonderful name here: it's probably approximately correct, or PAC.
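For reference, the standard textbook formalization behind that name (stated here in my words, ahead of the lecture): call a hypothesis h approximately correct if its error on the example distribution is at most some small \(\epsilon\),
\[
\mathrm{error}(h) \;=\; P_{x \sim \mathcal{D}}\bigl(h(x) \neq f(x)\bigr) \;\le\; \epsilon,
\]
and call the learner probably approximately correct if it returns such an h with probability at least \(1-\delta\). Demanding that, with probability \(1-\delta\), no seriously wrong hypothesis in H survives all N examples gives the classic sample-complexity bound
\[
|H|\,(1-\epsilon)^N \;\le\; \delta \quad\Longrightarrow\quad N \;\ge\; \frac{1}{\epsilon}\Bigl(\ln|H| + \ln\tfrac{1}{\delta}\Bigr).
\]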

Part of a chapter:
Chapter 25. Learning from Observations

Accessible via: Open access

Duration: 00:25:36 min

Recording date: 2021-03-30

Uploaded on: 2021-03-30 16:57:47

Language: en-US

The idea of PAC learning is explained. Also, sample complexity and how to escape it are discussed.
